# Multi-batch optimization
## Wav2vec2 Large Xls R 300m Turkish Colab
License: Apache-2.0
A speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Common Voice Turkish dataset, achieving a word error rate of 30.36% on the evaluation set.
Tags: Speech Recognition · Transformers
Author: dperezjr · Downloads: 96 · Likes: 0

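Several of the ASR entries in this list report a word error rate (WER) on their evaluation set. WER is the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words; a minimal pure-Python sketch (function and variable names are illustrative, not from any of the listed models):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between the first i reference words
    # and the first j hypothesis words, updated row by row.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,               # deletion
                dp[j - 1] + 1,           # insertion
                prev_diag + (r != h),    # substitution (or match)
            )
    return dp[-1] / len(ref)
```

So a reported WER of 0.3907 means roughly 39 word-level errors per 100 reference words; note that WER can exceed 1.0 when the hypothesis contains many insertions.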
## Ai Light Dance Stepmania Ft Wav2vec2 Large Xlsr 53 V6
License: Apache-2.0
An automatic speech recognition (ASR) model fine-tuned from wav2vec2-large-xlsr-53 on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
Tags: Speech Recognition · Transformers
Author: gary109 · Downloads: 160 · Likes: 0

## 84rry Xls R 300M AR
License: Apache-2.0
An Arabic speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Common Voice dataset, achieving a word error rate of 0.5078 on the evaluation set.
Tags: Speech Recognition · Transformers
Author: 84rry · Downloads: 27 · Likes: 0

## Wav2vec2 Large Xls R 300m Turkish Colab Common Voice 8 6
License: Apache-2.0
A Turkish speech recognition model based on the wav2vec2 architecture, fine-tuned on the common_voice dataset.
Tags: Speech Recognition · Transformers
Author: husnu · Downloads: 21 · Likes: 0

## Wav2vec2 Large Xls R 300m Urdu
License: Apache-2.0
A fine-tuned version of facebook/wav2vec2-xls-r-300m optimized for Urdu speech recognition.
Tags: Speech Recognition · Transformers
Author: omar47 · Downloads: 27 · Likes: 0

## Output
License: Apache-2.0
An automatic speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Mozilla Common Voice Portuguese dataset.
Tags: Speech Recognition · Transformers · Other
Author: tonyalves · Downloads: 28 · Likes: 0

## 2nd Wav2vec2 L Xls R 300m Turkish Test
License: Apache-2.0
A speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Common Voice Turkish dataset, achieving a word error rate of 0.4444 on the evaluation set.
Tags: Speech Recognition · Transformers
Author: Khalsuu · Downloads: 29 · Likes: 0

## Wav2vec2 Large Xls R 300m Turkish Colab
License: Apache-2.0
A speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Common Voice Turkish dataset, achieving a word error rate of 0.3907 on the evaluation set.
Tags: Speech Recognition · Transformers
Author: Khalsuu · Downloads: 22 · Likes: 0

## Roberta Base 10M 1
Part of a series of RoBERTa models pretrained on datasets of varying scales (1M-1B tokens), covering BASE and MED-SMALL configurations.
Tags: Large Language Model
Author: nyu-mll · Downloads: 13 · Likes: 1

## Roberta Base 100M 3
One of a set of RoBERTa variants pretrained on datasets ranging from 1M to 1B tokens, offered in BASE and MED-SMALL configurations and suited to natural language processing in resource-limited scenarios.
Tags: Large Language Model
Author: nyu-mll · Downloads: 18 · Likes: 0

## Roberta Base 100M 1
A RoBERTa base model pretrained on 1B tokens with a validation perplexity of 3.93, suited to English text processing tasks.
Tags: Large Language Model
Author: nyu-mll · Downloads: 63 · Likes: 0

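The RoBERTa entry above reports validation perplexity rather than WER. Perplexity is the exponential of the mean per-token negative log-likelihood (cross-entropy in nats), so lower is better; a minimal sketch (the function name and the example loss values are illustrative, not taken from the model card):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns every token probability 1/4 has per-token NLL ln(4)
# and therefore perplexity 4: it is "as uncertain as" a uniform 4-way choice.
example = perplexity([math.log(4)] * 3)
```

Under this reading, the reported validation perplexity of 3.93 corresponds to a mean per-token cross-entropy of about ln(3.93) ≈ 1.37 nats.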
## Wav2vec2 Large Xls R Armenian Colab
License: Apache-2.0
A fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset, supporting Armenian speech recognition.
Tags: Speech Recognition · Transformers
Author: lilitket · Downloads: 24 · Likes: 0

## Wav2vec2 Large Xls R 300m Hi Wx1
License: Apache-2.0
An automatic speech recognition (ASR) model based on Facebook's wav2vec2-xls-r-300m, fine-tuned on the Hindi Common Voice 7.0 dataset.
Tags: Speech Recognition · Transformers · Other
Author: DrishtiSharma · Downloads: 18 · Likes: 0

## Wav2vec2 Lar Xlsr Finetune Es Col
License: Apache-2.0
A Spanish (Colombian accent) speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53, achieving a word error rate (WER) of 0.2595 on the evaluation set.
Tags: Speech Recognition · Transformers
Author: Santiagot1105 · Downloads: 26 · Likes: 0
